Internet Message Format | 1998-02-16 | 15KB
Date: Thu, 16 Oct 1997 11:57:01 +0400
From: Charles Blaquière <blaq@INTERLOG.COM>
Subject: Re: [IML] IFW: Alpha Channel
<< the original Subject was a question about Alpha Channel support in
Imagine >>
Mike Tidwell wrote:
>
> 4. Depth Alpha - this is a separate 8 bit grayscale file that renders the
> picture in shades of gray. For example, the deeper things are in the
> scene, the darker they are.
Actually, there's an Imagine texture called ZBuffer.itx, which you can
apply to all objects in a scene. It does require setting up a duplicate
project and re-texturing your objects, but at least the ability to
output a Z buffer is already in there.
-------------------------------------------------------------------
<< Here are more postings about alpha channels. Very interesting >>
-------------------------------------------------------------------
Date: Fri, 16 Jan 1998 13:45:13 +0400
From: Charles Blaquiere <blaq@INTERLOG.COM>
Subject: Re: [IML] IMPULSE: alpha channel (was: IFW:Stuff)
mike halvorson wrote:
>
> As for alpha channel, as I said full support will be in 2.0, now why
> dont you tell me and all the others what you think this is, so that in
> advance of the release of the software, we can make sure that what we
> put in and what you expect are clear for all to see and comment on.
>
> What do you think you can do with this feature, how are you going to use
> it, what controls do you think should be a part of this.
OK Mike, I'll bite. I see alpha channel as a two-pronged issue: input
and output.
INPUT
-----
I'd like to see Imagine be able to read brushmap images that have a
built-in alpha channel or transparency layer; this would essentially
make the Genlock checkbox obsolete. In addition to enabling varying
levels of transparency in different areas of the same brushmap, it would
also give us fully antialiased edges. (Does anyone still create text in
a paint program without anti-aliased edges? Then why should we still
have to live with an all-or-nothing, 1-bit Genlock?)
Judging by Photoshop's Save As dialog, it seems as if the file formats
that support more than three 8-bit channels are Photoshop, PICT, Pixar,
Raw, Targa, and TIFF. Imagine already supports Targa and TIFF, so I
envision adding alpha channel support to these two formats.
In Imagine's brushmap Attributes dialog, on the General tab, I envision
a new control, next to Mix/Morph, called "Alpha Channel", composed of a
text field and Browse button, just like the Subgroup and Tacking State.
It would be enabled (i.e. not greyed out) whenever the brushmap was a
Targa or TIFF file. Clicking on the Browse button would bring up a list
of all the channel names in the file, from which a user could select
which contains alpha data.
If the field is disabled, or if it's enabled and blank, Imagine will
apply the brushmap using the Mix/Morph value only. If an alpha channel
has been selected, then Imagine will use (Mix/Morph value)*(Alpha
Channel pixel) to define the strength with which the current brushmap
pixel will be applied. This is important: even if I use an alpha
channel, I still want the global control offered by Mix/Morph, to fade a
brushmap in and out, for example.
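The proposed (Mix/Morph value)*(Alpha Channel pixel) rule could be sketched like this; a hypothetical Python illustration, not Imagine code, with all names invented for the example:

```python
# Hypothetical sketch of the proposed blend rule: the global Mix/Morph
# value scales the per-pixel alpha, so an alpha channel never overrides
# the global fade control.

def blend_strength(mix_morph: float, alpha_pixel: int) -> float:
    """Strength (0.0-1.0) with which a brushmap pixel is applied.

    mix_morph   -- global Mix/Morph setting, 0.0 (off) to 1.0 (full)
    alpha_pixel -- 8-bit alpha channel value for this pixel, 0-255
    """
    return mix_morph * (alpha_pixel / 255.0)

def apply_pixel(base: float, brush: float,
                mix_morph: float, alpha_pixel: int) -> float:
    """Blend one channel of a brushmap pixel over the base surface colour."""
    s = blend_strength(mix_morph, alpha_pixel)
    return base * (1.0 - s) + brush * s
```

With Mix/Morph at 0.5 and a fully opaque alpha pixel (255), the brushmap is applied at half strength; fading Mix/Morph to 0 removes the brushmap entirely, regardless of the alpha data.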
OUTPUT
------
When rendering images, I'd like Imagine to output alpha channel
information. This would be controlled via the Project/Render dialog's
Image Files tab. There would be a checkbox called "Save alpha channel",
just like the one for "Save Image Files"; an "Inverse Video"-type radio
button; and a radio button to select whether to save alpha data as an
additional channel inside each rendered image, or as a separate set of
image files:
[X] Save Alpha Channel White is: (*) Fully Transparent
( ) Fully Opaque
(*) Additional Channel Inside Image:
Channel Name:
[Opacity______________]
( ) Separate Alpha Image File:
Filename(s)
[_________________________] [Browse]
File Type
[Windows BMP 24 (.BMP)] [V]
[X] Don't Render Background
All these controls would be greyed out unless the File Type (in the main
"Save Image Files" section) was Targa, TIFF, or Photoshop -- the output
formats supported by Imagine, which allow additional channels. "Channel
Name" would be a text field, prefilled with the name "Opacity". The
"Separate File" controls are just like those in the main "Save Image
Files" section. And the "White is:" radio button determines whether an
alpha channel value of 255 represents full transparency (like Imagine's
Filter mapping) or full opacity (like Photoshop's Layer Masks). The
"Don't Render Background" checkbox is discussed below.
As for the output alpha data itself, it should be the aggregate
transparency of everything in the scene at a certain pixel, disregarding
the Globals Backdrop option. Even if a Backdrop is specified, the alpha
channel should consider it transparent. (If the user creates a backdrop
of their own by adding a large plane or sphere to the scene, it's an
Imagine object, and that's a different story.)
By aggregate transparency, I mean that the transparency values of all
object faces present in the pixel being evaluated must be multiplied together.
For example:
+--------+
| B | A
| +----------------+
| | D | |
| | | |
+---|----+ C |
| |
+----------------+
Let's assume the square object is 50% transparent, and the rectangle is
30% transparent. In area A, no objects are present; transparency is
100%. Transparency is 50% in area B, and 30% in area C. In area D, the
objects overlap, and transparency is (50%)*(30%) = 15%. It's quite
simple, really.
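The multiplication rule above can be sketched in a few lines; this is a hypothetical Python illustration of the rule, not Imagine code:

```python
# Sketch of the "aggregate transparency" rule: the alpha value for a
# pixel is the product of the transparencies of every face the eye ray
# crosses at that pixel.

from math import prod

def aggregate_transparency(face_transparencies):
    """Multiply the transparency of every face covering a pixel.

    An empty list means no object is present, i.e. 100% transparent.
    """
    return prod(face_transparencies, start=1.0)

# Areas from the diagram: A = no objects, B = square only,
# C = rectangle only, D = both overlap.
area_a = aggregate_transparency([])          # 1.0  (100%)
area_b = aggregate_transparency([0.5])       # 0.5  (50%)
area_c = aggregate_transparency([0.3])       # 0.3  (30%)
area_d = aggregate_transparency([0.5, 0.3])  # 0.15 (15%)
```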
This takes care of the alpha data. What about the RGB color data? When
rendering with an alpha channel, you obviously have compositing in mind,
and that changes the rules regarding RGB value.
I propose the following: when rendering for compositing purposes, it's
preferable NOT to blend the color of objects with that of the
background. The RGB value saved in the render file should be the "pure"
object color without a drop of background mixed in, regardless of
aggregate transparency. The reason is simple: transparency is addressed
in the alpha channel; the RGB channels are there to record the
unadulterated object color, without regard for transparency. The best
way to convince you this is important is by example.
Let's say you render a semitransparent, black-and-white checkered
sphere, to be composited over a variety of scenes. Because of its
transparency, the sphere will pick up whatever background color you used
in Imagine; if you used red, the sphere will be dark red and light pink;
if it's blue, the sphere will have a blue tint; and so on.
"What about rendering on black, or white, or medium grey?" Well, if you
render on black, the black areas will be OK, but the white areas will
become medium grey due to the background. When you composite this
object, you'll have a black-and-grey sphere flying around your
composition. If you render on white, then it's the black areas that will
render as medium grey, and again your sphere will be unacceptable. If
you render on medium grey, then both the white and black parts will get
some medium grey mixed in, making your object a dark-and-light grey
sphere.
In all cases, blending the object color with the Imagine background
brings undesirable side effects. This is why I propose the "Don't render
background" checkbox: when checked, it prevents background color (from
the Backdrop image or Zenith/Horizon colors) from being blended into the
rendered image. The result will be pure, unadulterated object colors
that will composite cleanly on any project.
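The difference between keeping the object colour pure and letting the render background bleed in can be shown numerically; a hypothetical Python sketch of standard "over" compositing with a straight (non-premultiplied) alpha, not Imagine's internals:

```python
# RGB holds the object colour untouched; the alpha channel alone says
# how much of it to use. Values are floats in 0.0-1.0.

def composite(fg_rgb, alpha, bg_rgb):
    """Standard 'over' composite with straight (non-premultiplied) alpha."""
    return tuple(f * alpha + b * (1.0 - alpha) for f, b in zip(fg_rgb, bg_rgb))

# A white checker face that is 50% transparent (alpha = opacity = 0.5).
pure_white = (1.0, 1.0, 1.0)

# Rendered "pure" (no background blended in): composites correctly
# over any background, here a blue one.
on_blue = composite(pure_white, 0.5, (0.0, 0.0, 1.0))   # (0.5, 0.5, 1.0)

# Rendered over a black Imagine background first (the white is already
# halved to grey), then composited again: the black pollutes the result.
polluted = (0.5, 0.5, 0.5)
wrong = composite(polluted, 0.5, (0.0, 0.0, 1.0))       # (0.25, 0.25, 0.75)
```

The pure render gives the expected half-white, half-blue mix; the pre-blended render composites to a muddy grey-blue, which is exactly the black-and-grey sphere problem described above.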
And, if you think that this only affects semitransparent objects and you
can live with rendering only totally opaque objects, remember one thing.
Edge pixels have various levels of transparency (if alpha support is
properly implemented), so even opaque objects will get some background
color polluting their edges. You'll get ugly edge artifacts when
compositing, similar to those seen on the Web when a GIF icon was
created on a background that didn't match that of the web page on which
it was placed.
Frankly, I don't see a point in blending the Imagine background into
your objects, but I prefer having a checkbox to control this behavior,
just in case there's something I missed. Of course, the default value
should be "Don't Render Background" so that users always do the right
thing, _unless_ they voluntarily uncheck the box.
----------------------------------
Date: Fri, 16 Jan 1998 21:47:55 +0400
From: Charles Blaquiere <blaq@INTERLOG.COM>
Stephen G. wrote:
>
> Charles Alpha Channel request and information is exactly what I would like
> to see in Imagine. I couldn't have explained it better myself. I am only
> restating this because I want to add my vote here.
Thanks for the vote of confidence, Stephen. I do have a question for IML
members who have had real-life experience with some of the other 3D
software out there: do they actually implement my "no background"
rendering method when alpha channels come into play? After all, there's
a possibility that all the software in the world pollutes
semitransparent (and antialiased edge) pixels with whatever was in the
background at render time, and nobody's cried foul yet. I find it
doubtful, though; if I were a full-time professional user, and my
software left ugly color artifacts, I'd be crying bloody murder. (Oops,
there's the "B" word %^S )
----------------------------------
Date: Sat, 17 Jan 1998 11:30:50 +0400
From: Charles Blaquiere <blaq@INTERLOG.COM>
Paul Wehner wrote:
>
> I think an alpha channel option in Imagine would be very useful for
> some users. I think that the majority of people probably wouldn't make
> full use of it.
I agree, but it's the kind of thing that might become more popular than
we may think; for example, many of us might start to render animations
in layers, allowing us to apply a quick Gaussian blur to the background
for more realism. (It's so much faster to do a 2D blur in Photoshop than
to use Imagine's full-fledged, 3D-aware blur.)
Oh, and speaking of which, there's a suggestion I forgot to include in
my initial post: along with a new section for "save alpha channel"
controls, there could be another one for "save Z-buffer". (That's the
standard name given, in CG circles, to the array of numbers that
describe the relative distance of objects to the camera, for each pixel.
In Imagine, this lies along the camera's Y axis, but historically, this
has been labelled as Z.) To avoid confusion, I'd call it a "depth
channel" instead:
[X] Save Depth Info White is: (*) Closest to camera
( ) Furthest from camera
Depth Range: from [0_______] Imagine Units
to [32767___] Imagine Units
(*) Additional Channel Inside Image:
Channel Name:
[Depth________________]
( ) Separate Depth Image File:
Filename(s)
[_________________________] [Browse]
File Type
[Windows BMP 24 (.BMP)] [V]
Again, you have an "inverse video"-type control to determine whether
depth is saved as a black-to-white or white-to-black greyscale. There
are additional controls to set the distance range represented by this
greyscale; anything outside that range would be clamped to black or white. You
could shorten the "to" distance, for example, if your scene basically
fit into a 5000-unit radius from the camera; this would give you more
precision for nearer objects.
I only suggest this additional functionality if its implementation is
practical, i.e. if this would be a trivial development effort on the
part of Impulse. I feel even fewer people would use this data than the
alpha channel, but if we're lucky, at render time, Imagine has these
numbers just ready to be written to disk.
Two ways to use this information when compositing: (1) to add elements
in your composition that will move in front of parts of the rendered
scene, but behind others; (2) to apply a different degree of blurring to
each pixel, based on its Z-buffer value -- you'd get DOF effects in a
fraction of the time it takes in Imagine.
> I'd rather Impulse spend resources on fixing current small bugs and
> adding new features.
I have always thought that Impulse's single highest priority should be
to squash bugs.
> I think that anyone who absolutely
> understands the need for alpha channel in their work probably already
> has some compositing software like after effects and digital fusion.
Yes, but you're missing the point: without alpha channel information,
these people have no easy way to composite Imagine renders on top of
other elements. They have the software, but are missing an important
piece of the puzzle.
----------------------------------
Date: Sun, 18 Jan 1998 10:51:52 -0600
From: "Stephen G." <sgiff@AIRMAIL.NET>
At 09:45 PM 1/17/98 -0500, you wrote:
>
>I've been creating matte elements and using matte elements for compositing
>with Imagine or Imagine renders for years. I've composited actors in 3D
>generated sets right in Imagine utilizing color-maps and synchronized filter-
>maps with pixel perfect registration with flawless results. And I've
>generated matte elements to cut out Imagine renders to be composited with
>imported live-action in Digital On-line suites, After EFX or Premiere. Alpha
>Channels may make this simpler but the capability is present now with a
>separate scanline render and the lights turned off isolating what you want to
>composite.
>
You can create a separate alpha channel by turning off the lights in your scene
and rendering every object as 100% bright white. This, however, does not take
into account transparent objects, nor does it allow any visible lights or
fog objects unless you leave them white as well. You will still need a way
to composite the alpha channel render you have made with your original
Targa files to make true 32-bit alpha channel frames. Mark Willis has
written a utility to do this, but it would still be much easier if it
were a built-in feature in Imagine. The other thing I wanted to say is
that no external program can create an alpha channel from an arbitrary image
automatically. It has to be done inside Imagine, and that is why it's
important. Sure, some of you will say that you can manually create an alpha
channel in Photoshop with some hard work, depending upon the image, but for
video frames this would be far too time-consuming.
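What such a merge utility does, in essence, is interleave the RGB render with the greyscale matte render into 32-bit RGBA pixels. A hypothetical Python sketch (flat pixel lists for simplicity; a real tool, like the one described above, would read and write Targa files):

```python
# Combine an RGB frame and a matching greyscale matte frame into
# 32-bit RGBA pixels. All names here are illustrative.

def merge_rgb_and_matte(rgb_pixels, matte_pixels):
    """Pair (r, g, b) tuples with 8-bit matte values into RGBA tuples."""
    if len(rgb_pixels) != len(matte_pixels):
        raise ValueError("render and matte frames must be the same size")
    return [(r, g, b, a) for (r, g, b), a in zip(rgb_pixels, matte_pixels)]

rgb = [(255, 0, 0), (0, 0, 0)]   # two pixels from the colour render
matte = [255, 0]                 # object present, then empty background
frame = merge_rgb_and_matte(rgb, matte)
# frame == [(255, 0, 0, 255), (0, 0, 0, 0)]
```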